Ghost penalties in nonconvex constrained optimization: Diminishing stepsizes and iteration complexity
Authors
Abstract
We consider, for the first time, general diminishing-stepsize methods for nonconvex, constrained optimization problems. We show that, by using directions obtained in an SQP-like fashion, convergence to generalized stationary points can be proved. To do so, we use classical penalty functions in an unconventional way: penalty functions enter only the theoretical analysis of convergence, while the algorithm itself is penalty-free. We then consider the iteration complexity of this method and of some variants in which the stepsize is either kept constant or decreased according to very simple rules. We establish convergence to δ-approximate stationary points in at most O(δ^{-2}), O(δ^{-3}), or O(δ^{-4}) iterations, according to the assumptions made on the problem. These complexity results nicely complement the very few existing results in the field.
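As a minimal, hedged illustration of the diminishing-stepsize idea underlying the abstract (this is a generic projected-gradient sketch, not the paper's ghost-penalty SQP-like method), consider stepsizes γ_k = 1/(k+1), which satisfy the classical conditions Σγ_k = ∞ and Σγ_k² < ∞:

```python
import numpy as np

def diminishing_step_projected_gradient(grad, project, x0, iters=500):
    """Generic projected-gradient iteration with diminishing stepsizes
    gamma_k = 1/(k+1).  Illustrative only: NOT the ghost-penalty
    SQP-like method analyzed in the paper."""
    x = np.asarray(x0, dtype=float)
    for k in range(iters):
        gamma = 1.0 / (k + 1)             # diminishing stepsize
        x = project(x - gamma * grad(x))  # gradient step + projection
    return x

# Example: minimize (x - 2)^2 over the box [0, 1]; the minimizer is x = 1.
grad = lambda x: 2.0 * (x - 2.0)
project = lambda x: np.clip(x, 0.0, 1.0)
x_star = diminishing_step_projected_gradient(grad, project, np.array([0.0]))
```

With a constrained minimizer on the boundary, the diminishing steps keep the iterate pinned at the projection of the unconstrained descent direction, here x = 1.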
Similar resources
Asynchronous Gossip-Based Random Projection Algorithms for Fully Distributed Problems
We consider fully distributed constrained convex optimization problems over a network, where each network agent has a distinct objective and constraint set. We discuss a gossip-based random projection algorithm (GRP) with uncoordinated diminishing stepsizes. We prove that, when the problem has a solution, the iterates of all network agents converge to the same optimal point with probability 1.
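A toy two-agent sketch of the consensus-plus-projection idea behind such methods (an illustrative construction under simplifying assumptions, not the GRP algorithm itself, and with a deterministic averaging step in place of random gossip):

```python
import numpy as np

# Each agent has its own objective f_a and its own constraint set;
# agents average their iterates ("gossip"), then each takes a
# diminishing gradient step and projects onto its OWN set.
grads = [lambda x: 2.0 * x,             # agent 0: f_0(x) = x^2
         lambda x: 2.0 * (x - 2.0)]     # agent 1: f_1(x) = (x - 2)^2
projs = [lambda x: np.clip(x, 0.5, 3.0),    # agent 0's constraint set
         lambda x: np.clip(x, -1.0, 1.5)]   # agent 1's constraint set

x = np.array([3.0, -1.0])               # local iterates
for k in range(20000):
    gamma = 1.0 / (k + 1)               # diminishing stepsize
    avg = 0.5 * (x[0] + x[1])           # averaging (consensus) step
    for a in (0, 1):
        x[a] = projs[a](avg - gamma * grads[a](avg))
# Both iterates approach x = 1, the minimizer of f_0 + f_1 over the
# intersection [0.5, 1.5] of the two constraint sets.
```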
Iteration-Complexity of a Linearized Proximal Multiblock ADMM Class for Linearly Constrained Nonconvex Optimization Problems
This paper analyzes the iteration-complexity of a class of linearized proximal multiblock alternating direction method of multipliers (ADMM) for solving linearly constrained nonconvex optimization problems. The subproblems of the linearized ADMM are obtained by partially or fully linearizing the augmented Lagrangian with respect to the corresponding minimizing block variable. The derived comple...
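A scalar toy sketch of the "partial linearization" idea mentioned in the snippet (an illustrative construction of my own, not the paper's multiblock scheme): for min ½(x − a)² + ½(z − d)² s.t. x − z = 0, the x-subproblem linearizes the augmented Lagrangian (reducing to a plain gradient step), while the z-subproblem is minimized exactly.

```python
# Linearized-ADMM-style iteration on a two-block quadratic toy problem.
a, d = 0.0, 2.0
rho, tau = 1.0, 0.5       # penalty parameter; stepsize tau <= 1/(1 + rho)
x = z = lam = 0.0
for _ in range(200):
    # linearized x-update: gradient step on L_rho(., z, lam)
    x = x - tau * ((x - a) + lam + rho * (x - z))
    # exact z-update: closed-form minimizer of L_rho(x, ., lam)
    z = (d + lam + rho * x) / (1.0 + rho)
    # dual update for the constraint x - z = 0
    lam = lam + rho * (x - z)
# Converges to x = z = (a + d)/2 = 1 with multiplier lam = -1.
```

Linearizing a block trades subproblem accuracy for a cheap closed-form step, which is what makes the per-iteration cost of such ADMM variants low.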
Quasi-Newton Methods for Nonconvex Constrained Multiobjective Optimization
Here, a quasi-Newton algorithm for constrained multiobjective optimization is proposed. Under suitable assumptions, global convergence of the algorithm is established.
Linearized Alternating Direction Method of Multipliers for Constrained Nonconvex Regularized Optimization
In this paper, we consider a wide class of constrained nonconvex regularized minimization problems, where the constraints are linear. It was reported in the literature that nonconvex regularization usually yields a solution with more desirable sparse structural properties beyond convex ones. However, it is not easy to obtain the proximal mapping associated with nonconvex regulariz...
A High-resolution DOA Estimation Method with a Family of Nonconvex Penalties
The low-rank matrix reconstruction (LRMR) approach is widely used in direction-of-arrival (DOA) estimation. As the rank norm penalty in an LRMR is NP-hard to compute, the nuclear norm (or the trace norm for a positive semidefinite (PSD) matrix) has been often employed as a convex relaxation of the rank norm. However, solving a nuclear norm convex problem may lead to a suboptimal solution of the...
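The nuclear-norm relaxation mentioned in the snippet is tractable because its proximal operator is simple singular-value soft-thresholding. A minimal sketch of that operator (an illustrative helper, not the paper's DOA estimator):

```python
import numpy as np

def svt(M, tau):
    """Singular-value soft-thresholding: the proximal operator of the
    nuclear norm tau * ||.||_*, the convex surrogate for the rank norm."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

# Shrinking singular values [3, 1] by tau = 2 zeroes the smaller one,
# producing a rank-1 matrix:
M_low = svt(np.diag([3.0, 1.0]), 2.0)
```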
Publication date: 2017